    Exploring Multi-Modal Distributions with Nested Sampling

    In performing a Bayesian analysis, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multi-modal or exhibit pronounced (curving) degeneracies. Second, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. Nested Sampling is a Monte Carlo method targeted at the efficient calculation of the evidence, but it also produces posterior inferences as a by-product and therefore provides a means to carry out parameter estimation as well as model selection. The main challenge in implementing Nested Sampling is to sample from a constrained probability distribution. One possible solution to this problem is provided by the Galilean Monte Carlo (GMC) algorithm. We show results of applying Nested Sampling with GMC to some problems which have proven very difficult for standard Markov Chain Monte Carlo (MCMC) and down-hill methods, due to the presence of a large number of local minima and/or pronounced (curving) degeneracies between the parameters. We also discuss the use of Nested Sampling with GMC in Bayesian object detection problems, which are inherently multi-modal and require the evaluation of the Bayesian evidence for distinguishing between true and spurious detections. Comment: Refereed conference proceeding, presented at the 32nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering
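    The hard step named above, drawing from the prior subject to a hard likelihood floor, is what GMC handles by moving a particle ballistically and reflecting it off the constraint boundary. Below is a minimal sketch of one such trajectory, assuming a differentiable log-likelihood and numpy-array states; the names loglike, grad_loglike and the tuning defaults are illustrative, not the authors' implementation.

        import numpy as np

        def gmc_trajectory(x, v, loglike, grad_loglike, logL_star, dt=0.1, n_steps=20):
            # Evolve a particle inside the region loglike(x) > logL_star.
            for _ in range(n_steps):
                x_try = x + dt * v
                if loglike(x_try) > logL_star:
                    x = x_try                      # still inside: keep drifting
                    continue
                g = grad_loglike(x_try)            # gradient gives the normal of
                n = g / np.linalg.norm(g)          # the iso-likelihood surface
                v_ref = v - 2.0 * n * (n @ v)      # specular reflection of v
                x_ref = x + dt * v_ref
                if loglike(x_ref) > logL_star:
                    x, v = x_ref, v_ref            # reflection re-entered the region
                else:
                    v = -v                         # failed reflection: backtrack
            return x, v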

    Analytic Continuation of Quantum Monte Carlo Data by Stochastic Analytical Inference

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on the principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product also the distribution function of the regularization parameter. Our algorithm thus avoids the usual ad-hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum entropy calculation.
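    As a sketch of the kind of "standard Monte Carlo simulation" the abstract refers to, one can Metropolis-sample discretized non-negative spectra with weight exp(-alpha * chi^2 / 2) and average them. The kernel K, data G and the weight-shift move below are generic stand-ins under those assumptions, not the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        def chi2(A, K, G, sigma):
            # Misfit of a trial spectrum: G_model = K @ A on the tau grid.
            return np.sum(((K @ A - G) / sigma) ** 2)

        def mean_spectrum(K, G, sigma, alpha, n_omega, n_sweeps=5000):
            # Metropolis sampling with weight exp(-alpha * chi2 / 2); the
            # sample average is the weighted mean spectrum at this alpha.
            A = np.full(n_omega, 1.0 / n_omega)   # flat, normalized start
            c2 = chi2(A, K, G, sigma)
            mean = np.zeros(n_omega)
            for _ in range(n_sweeps):
                i, j = rng.integers(n_omega, size=2)
                dA = rng.uniform(0.0, A[i])       # shift weight i -> j, keeping
                trial = A.copy()                  # positivity and normalization
                trial[i] -= dA
                trial[j] += dA
                c2_new = chi2(trial, K, G, sigma)
                if c2_new <= c2 or rng.random() < np.exp(-0.5 * alpha * (c2_new - c2)):
                    A, c2 = trial, c2_new
                mean += A
            return mean / n_sweeps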

    Acceleration of energetic particles by large-scale compressible magnetohydrodynamic turbulence

    Fast particles diffusing along magnetic field lines in a turbulent plasma can diffuse through and then return to the same eddy many times before the eddy is randomized in the turbulent flow. This leads to an enhancement of particle acceleration by large-scale compressible turbulence relative to previous estimates in which isotropic particle diffusion is assumed. Comment: 13 pages, 3 figures, accepted for publication in Ap

    Nested sampling for Potts models

    Nested sampling is a new Monte Carlo method by Skilling [1] intended for general Bayesian computation. Nested sampling provides a robust alternative to annealing-based methods for computing normalizing constants. It can also generate estimates of other quantities such as posterior expectations. The key technical requirement is an ability to draw samples uniformly from the prior subject to a constraint on the likelihood. We provide a demonstration with the Potts model, an undirected graphical model.
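    A skeleton of the method, to make the key technical requirement concrete: everything below is routine bookkeeping except sample_constrained, which must draw uniformly from the prior subject to a hard likelihood floor (for the Potts model this needs problem-specific moves; here it is left abstract). This is a minimal sketch with illustrative names and defaults.

        import numpy as np

        def nested_sampling(logL, sample_prior, sample_constrained,
                            n_live=100, n_iter=2000):
            # Evidence accumulated as Z ~= sum_i L_i * w_i, where w_i is the
            # expected prior volume of the i-th likelihood shell.
            live = [sample_prior() for _ in range(n_live)]
            logLs = np.array([logL(x) for x in live])
            logZ = -np.inf
            for i in range(n_iter):
                worst = int(np.argmin(logLs))
                # shell weight: X_i = exp(-i/n_live), w_i = X_i - X_{i+1}
                logw = -i / n_live + np.log1p(-np.exp(-1.0 / n_live))
                logZ = np.logaddexp(logZ, logLs[worst] + logw)
                # replace the worst point under the hard likelihood constraint
                live[worst] = sample_constrained(logLs[worst])
                logLs[worst] = logL(live[worst])
            # remaining live points fill the final prior volume exp(-n_iter/n_live)
            logZ = np.logaddexp(logZ, np.log(np.mean(np.exp(logLs - logLs.max())))
                                + logLs.max() - n_iter / n_live)
            return logZ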

    Consistent Application of Maximum Entropy to Quantum-Monte-Carlo Data

    Bayesian statistics within the maximum entropy framework has been widely used for inference problems, particularly to infer dynamic properties of strongly correlated fermion systems from quantum Monte Carlo (QMC) imaginary-time data. In current applications, however, a consistent treatment of the error covariance of the QMC data is missing. Here we present a closed Bayesian approach to account consistently for the QMC data. Comment: 13 pages, RevTeX, 2 uuencoded PostScript figures
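    The consistency issue is easy to state in code: correlated QMC data call for the full covariance matrix in the misfit, not just its diagonal. A minimal illustration with generic names:

        import numpy as np

        def chi2_full(G_data, G_model, cov):
            # chi^2 = (d - m)^T C^{-1} (d - m), using the full error
            # covariance C of the imaginary-time QMC data.
            r = G_data - G_model
            return r @ np.linalg.solve(cov, r)

        def chi2_diagonal(G_data, G_model, cov):
            # The common shortcut: keep only the diagonal of C, which
            # ignores correlations between imaginary-time points.
            r = G_data - G_model
            return np.sum(r ** 2 / np.diag(cov))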

    Maximum Entropy and Bayesian Data Analysis: Entropic Priors

    The problem of assigning probability distributions which objectively reflect the prior information available about experiments is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail. Comment: 23 pages, 2 figures
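    In symbols, the central objects are the relative entropy and the prior it induces. The following is a schematic rendering of the construction (notation ours, not necessarily the paper's), with q a reference measure and alpha a scale parameter:

        S[p \mid q] = -\int \mathrm{d}x \, p(x) \log \frac{p(x)}{q(x)},
        \qquad
        \pi(\theta) \propto \exp\bigl( \alpha \, S[\, p(\cdot \mid \theta) \mid q \,] \bigr),

    where p(x | theta) is the known likelihood family whose information the entropic prior pi(theta) encodes.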

    Quantifying the tension between the Higgs mass and (g-2)_mu in the CMSSM

    Supersymmetry has often been invoked as the new physics that might reconcile the experimental muon magnetic anomaly, a_mu, with the theoretical prediction (basing the computation of the hadronic contribution on e^+ e^- data). However, in the context of the CMSSM, the required supersymmetric contributions (which grow with decreasing supersymmetric masses) are in potential tension with a possibly large Higgs mass (which requires large stop masses). In the limit of very large m_h supersymmetry decouples, and the CMSSM must show the same discrepancy with a_mu as the SM. But it is much less clear at which size of m_h the tension starts to become unbearable. In this paper, we quantify this tension with the help of Bayesian techniques. We find that for m_h > 125 GeV the maximum level of discrepancy given current data (~ 3.3 sigma) is already achieved. Requiring a discrepancy of less than 3 sigma implies m_h < 120 GeV. For a larger Higgs mass we should give up either the CMSSM model or the computation of a_mu based on e^+ e^- data, or accept living with such an inconsistency.
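    For orientation only: the quoted sigma levels compare a measurement against a prediction with independent errors, roughly as below; the paper's Bayesian tension measure is more sophisticated than this naive count.

        import math

        def naive_sigma(a_exp, sig_exp, a_th, sig_th):
            # Discrepancy between measurement and prediction in Gaussian
            # sigmas, treating the two uncertainties as independent.
            return abs(a_exp - a_th) / math.hypot(sig_exp, sig_th)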

    Low temperature properties of the infinite-dimensional attractive Hubbard model

    We investigate the attractive Hubbard model in infinite spatial dimensions by combining dynamical mean-field theory with a strong-coupling continuous-time quantum Monte Carlo method. By calculating the superfluid order parameter and the density of states, we discuss the stability of the superfluid state. In the intermediate-coupling region above the critical temperature, the density of states exhibits heavy-fermion behavior with a quasi-particle peak in the dense system, while a dip structure appears in the dilute system. The formation of the superfluid gap is also addressed. Comment: 8 pages, 9 figures

    Model selection in cosmology

    Model selection aims to determine which theoretical models are most plausible given some data, without necessarily considering preferred values of model parameters. A common model selection question is to ask when new data require the introduction of an additional parameter, describing a newly discovered physical effect. We review model selection statistics, then focus on the Bayesian evidence, which implements Bayesian analysis at the level of models rather than parameters. We describe our CosmoNest code, the first computationally efficient implementation of Bayesian model selection in a cosmological context. We apply it to recent WMAP satellite data, examining the need for a perturbation spectral index differing from the scale-invariant (Harrison–Zel'dovich) case.
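    At the level of evidences, the selection step itself is simple: given log Z for each model, as returned by a nested sampler such as CosmoNest, the Bayes factor decides. A minimal sketch; the Jeffreys-scale thresholds below are the conventional cosmology reading, not taken from the paper.

        def compare_models(logZ_0, logZ_1):
            # Bayes factor B_01 = Z_0 / Z_1 from the two model evidences,
            # where Z is the integral of likelihood times prior over the
            # model's parameters.
            ln_B = logZ_0 - logZ_1
            strength = abs(ln_B)
            if strength < 1.0:
                verdict = "inconclusive"
            elif strength < 2.5:
                verdict = "weak-to-significant"
            elif strength < 5.0:
                verdict = "strong"
            else:
                verdict = "decisive"
            favoured = 0 if ln_B > 0 else 1
            return ln_B, f"evidence favours model {favoured} ({verdict})"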